
    Special Section on Attacking and Protecting Artificial Intelligence

    Modern artificial intelligence systems largely rely on advanced algorithms, including machine learning techniques such as deep learning. The research community has invested significant effort in understanding these algorithms, tuning them optimally, and improving their performance, but it has mostly neglected the security facet of the problem. Recent attacks and exploits have demonstrated that machine learning-based algorithms are susceptible both to attacks targeting computer systems in general, including backdoors, hardware trojans, and fault attacks, and to a range of attacks that target them specifically, such as adversarial input perturbations. Implementations of machine learning algorithms are often crucial proprietary assets for companies and therefore need to be protected. It follows that implementations of artificial intelligence-based algorithms are an attractive target for piracy and illegitimate use and, as such, need to be protected like any other intellectual property. This is equally important for machine learning algorithms running on remote servers vulnerable to micro-architectural exploits.

    Feature Classification for Robust Shape-Based Collaborative Tracking and Model Updating

    Abstract: A new collaborative tracking approach is introduced which takes advantage of classified features. At the core of this tracker is a single tracker that is able to detect occlusions and classify the features contributing to localizing the object. Features are classified into four classes: good, suspicious, malicious, and neutral. Good features are estimated to be parts of the object with a high degree of confidence. Suspicious features have a lower, yet still significant, degree of confidence of belonging to the object. Malicious features are estimated to be generated by clutter, while neutral features carry too much uncertainty to be assigned to the tracked object. When there is no occlusion, the single tracker acts alone, and the feature classification module helps it overcome distracters such as still objects or light clutter in the scene. When the bounding boxes of two or more tracked moving objects come close enough, the collaborative tracker is activated; it exploits the classified features to localize each object precisely and to update the objects' shape models more accurately by reassigning the classified features to the objects. The experimental results show successful tracking compared with a collaborative tracker that does not use classified features. Moreover, more precise updated object shape models are shown.
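    The four-way feature classification described above can be illustrated with a small sketch. The class names follow the abstract, but the two confidence scores and the threshold values are purely hypothetical assumptions for illustration; the paper's actual decision rule is not specified here.

    ```python
    # Hypothetical sketch of the four-way feature classification: the class
    # names come from the abstract, but the confidence scores and thresholds
    # below are illustrative assumptions, not the paper's actual rule.

    def classify_feature(object_confidence, clutter_confidence,
                         good_thr=0.8, suspicious_thr=0.5, malicious_thr=0.5):
        """Assign a tracked feature to one of the four classes."""
        if object_confidence >= good_thr:
            return "good"          # almost certainly part of the object
        if object_confidence >= suspicious_thr:
            return "suspicious"    # likely, but not confidently, the object
        if clutter_confidence >= malicious_thr:
            return "malicious"     # likely generated by clutter
        return "neutral"           # too uncertain to assign to the object

    # (object_confidence, clutter_confidence) pairs for four sample features.
    features = [(0.92, 0.05), (0.61, 0.20), (0.10, 0.75), (0.30, 0.30)]
    labels = [classify_feature(o, c) for o, c in features]
    # labels == ["good", "suspicious", "malicious", "neutral"]
    ```

    Only the good and suspicious features would then contribute to localization, while malicious ones are discarded and neutral ones are held back until more evidence arrives.
    
    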

    Exploring Parallelism to Improve the Performance of FrodoKEM in Hardware

    FrodoKEM is a lattice-based key encapsulation mechanism, currently a semi-finalist in NIST’s post-quantum standardisation effort. A condition for these candidates is to use NIST standards for sources of randomness (i.e. seed expansion), and as such most candidates utilise SHAKE, an XOF defined in the SHA-3 standard. However, for many of the candidates, this module is a significant implementation bottleneck. Trivium is a lightweight, ISO-standard stream cipher which performs well in hardware and has been used in previous hardware designs for lattice-based cryptography. This research proposes optimised designs for FrodoKEM, concentrating on high throughput by parallelising the matrix multiplication operations within the cryptographic scheme. This process is eased by the use of Trivium, due to its higher throughput and lower area consumption. The parallelisations proposed also complement the addition of first-order masking to the decapsulation module. Overall, we significantly increase the throughput of FrodoKEM: for encapsulation we see a 16× speed-up, achieving 825 operations per second, and for decapsulation we see a 14× speed-up, achieving 763 operations per second, compared to the previous state of the art, whilst also maintaining a similar FPGA area footprint of less than 2000 slices.
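    The matrix-multiplication parallelism described above can be sketched in software. FrodoKEM's core operation has the shape B = A·S + E over the integers modulo q; the sketch below splits the output rows across workers, which is a software analogue of the row-level parallelism a hardware design would implement. The dimensions and modulus here are small placeholders, not FrodoKEM's actual parameters.

    ```python
    # Software analogue of row-parallel computation of B = A*S + E (mod q),
    # the operation at the heart of FrodoKEM. Dimensions and q are toy
    # placeholders; the paper's designs realise this parallelism in hardware.
    from concurrent.futures import ThreadPoolExecutor

    def row_product(row_a, S, e_row, q):
        """One output row: row_a * S + e_row, reduced mod q."""
        n_bar = len(S[0])
        return [(sum(a * s_row[j] for a, s_row in zip(row_a, S)) + e_row[j]) % q
                for j in range(n_bar)]

    def matmul_add_mod(A, S, E, q=1 << 15):
        """Compute A*S + E (mod q), one worker per output row."""
        with ThreadPoolExecutor() as pool:
            return list(pool.map(lambda i: row_product(A[i], S, E[i], q),
                                 range(len(A))))
    ```

    Because the output rows are independent, the degree of parallelism can be traded directly against area, which is the knob the hardware designs above tune.
    
    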

    A fast cardiac electromechanics model coupling the Eikonal and the nonlinear mechanics equations

    We present a new model of human cardiac electromechanics for the left ventricle, in which electrophysiology is described by a Reaction-Eikonal model that enables an offline resolution of the reaction model, thus entailing substantial savings in computational time. Subcellular dynamics is coupled with a model of tissue mechanics, which is in turn coupled with a Windkessel model for blood circulation. Our numerical results show that the proposed model is able to provide a physiological response to changes in certain variables (end-diastolic volume, total peripheral resistance, contractility). We also show that our model is able to reproduce, with high accuracy and considerably lower computational time, the results that we would obtain if the monodomain model were used in place of the Eikonal model.
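    For orientation, the Eikonal model referred to above determines an activation-time map rather than resolving the full transmembrane-potential dynamics. In a generic anisotropic form (the precise formulation and symbols used in the paper may differ), the activation time T satisfies

    ```latex
    % Anisotropic Eikonal equation for the activation time T(\mathbf{x});
    % \mathbf{D} is the conductivity (anisotropy) tensor and c_f a reference
    % conduction velocity. Symbols are generic, not the paper's notation.
    c_f \,\sqrt{\nabla T^{\top}\,\mathbf{D}\,\nabla T} = 1 ,
    \qquad
    T = T_0 \ \ \text{on the stimulation sites.}
    ```

    Since this stationary equation is far cheaper to solve than the time-dependent monodomain equations, the reaction kinetics can be precomputed offline and replayed along the activation map, which is the source of the computational savings claimed above.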

    CCSW '22: The 2022 cloud computing security workshop

    Clouds and massive-scale computing infrastructures are starting to dominate computing and will likely continue to do so for the foreseeable future. Major cloud operators now run millions of cores hosting substantial fractions of corporate and government IT infrastructure. CCSW is the world's premier forum bringing together researchers and practitioners in all security aspects of cloud-centric and outsourced computing, including:
    · Side channel attacks
    · Cryptographic protocols for cloud security
    · Secure cloud resource virtualization mechanisms
    · Secure data management outsourcing (e.g., database as a service)
    · Privacy and integrity mechanisms for outsourcing
    · Foundations of cloud-centric threat models
    · Secure computation outsourcing
    · Remote attestation mechanisms in clouds
    · Sandboxing and VM-based enforcements
    · Trust and policy management in clouds
    · Secure identity management mechanisms
    · Cloud-aware web service security paradigms and mechanisms
    · Cloud-centric regulatory compliance issues and mechanisms
    · Business and security risk models and clouds
    · Cost and usability models and their interaction with security in clouds
    · Scalability of security in global-size clouds
    · Binary analysis of software for remote attestation and cloud protection
    · Network security (DoS, IDS, etc.) mechanisms for cloud contexts
    · Security for emerging cloud programming models
    · Energy/cost/efficiency of security in clouds
    · Open hardware for cloud
    · Machine learning for cloud protection
    CCSW especially encourages novel paradigms and controversial ideas that are not on the above list. The workshop has historically acted as fertile ground for creative debate and interaction in security-sensitive areas of computing impacted by clouds. This year marked the 13th anniversary of CCSW. Over the past decade, CCSW has had a significant impact on our research community.

    Interdisciplinary Trauma-Bay Management of Trauma Surgery Patients from the Staff's Perspective

    Summary: Introduction: We investigated whether a staff survey can be useful for quality control of trauma-bay (resuscitation-room) management. Methods: Consecutive anonymous written survey (15 questions, Likert scale 1-5) of the clinical staff involved in all trauma-bay deployments with suspected multiple injuries from July 2002 to December 2003 (ANOVA; p<0.05). Results: Across 171 trauma-surgery deployments, 884 participants returned the questionnaire. The staff's observations depended significantly on the respective trauma-bay situation. Time management and the respondents' own training drew the most criticism (Likert scale <4). Senior and attending physicians rated their level of training higher than residents did and had more often completed an ATLS® course (p<0.001). There were significant systematic differences in the ratings, e.g. depending on the respondents' specialty. Conclusion: Our questionnaire proved to be a well-discriminating instrument and can thus usefully complement the collection of clinical parameters in quality management of the trauma-bay phase. Before broader application, however, additional validation and correlation studies are needed.

    Collective Awareness for Abnormality Detection in Connected Autonomous Vehicles

    Recent advances in connected and autonomous vehicles demand tools that give agents the capability to be aware of, and to predict, their own states and context dynamics. This article presents a novel approach to developing an initial level of collective awareness (CA) in a network of intelligent agents. A specific collective self-awareness functionality is considered, namely, agent-centred detection of abnormal situations present in the environment around any agent in the network. Moreover, each agent should be capable of analyzing how such abnormalities can influence its future actions. Data-driven dynamic Bayesian network (DBN) models, learned from time series of sensory data recorded during the execution of tasks (agent network experiences), are used here for abnormality detection and prediction. A set of DBNs, each related to one agent, allows the agents in the network to become synchronously aware of possible abnormalities that occur when the available models are applied to a new instance of the task for which the DBNs were learned. A growing neural gas (GNG) algorithm is used to learn the node variables and the conditional probabilities linking nodes in the DBN models; a Markov jump particle filter (MJPF) is employed for state estimation and abnormality detection in each agent, using the learned DBNs as filter parameters. Performance metrics are discussed to assess the algorithm's reliability and accuracy. The impact of the communication channel used by the network to share the data sensed in a distributed way by each agent is also evaluated; the IEEE 802.11p protocol standard has been considered for communication among agents. The performance of the DBN-based abnormality detection models under different channel and source conditions is discussed, and the effects of inter-agent distances, delays, and packet losses are analyzed in different scenario categories (urban, suburban, and rural).
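    The agent-centred abnormality detection described above can be reduced to a minimal sketch: each agent compares new observations against its learned model's predictions and flags an abnormality when the innovation (prediction error) exceeds a threshold. The one-dimensional persistence model and the threshold below are illustrative assumptions; the paper uses learned DBNs filtered by an MJPF, not this toy rule.

    ```python
    # Toy illustration of innovation-based abnormality detection: flag the
    # time steps where the observation deviates from the model's prediction
    # by more than a threshold. The "predict last value" model and the
    # threshold are assumptions; the paper uses learned DBNs with an MJPF.

    def detect_abnormalities(observations, predict, threshold):
        """Return the indices where |observation - prediction| > threshold."""
        flags = []
        state = observations[0]
        for i, obs in enumerate(observations[1:], start=1):
            expected = predict(state)      # model's one-step prediction
            if abs(obs - expected) > threshold:
                flags.append(i)            # large innovation: abnormality
            state = obs                    # update state with the observation
        return flags

    # Persistence model: expect the next value to equal the last one.
    # Note the jump at index 3 is flagged on the way up and on the way back.
    trace = [0.0, 0.1, 0.2, 3.0, 0.4]
    print(detect_abnormalities(trace, lambda s: s, threshold=1.0))  # → [3, 4]
    ```

    In the networked setting sketched in the abstract, each agent would run such a detector with its own learned model and share the resulting evidence over the (IEEE 802.11p) channel, so that the agents reach a collective, synchronised view of the abnormality.
    
    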